What artificial intelligence might teach us about the origin of human language

Kilpatrick, Alexander

arXiv.org Artificial Intelligence

This study explores an interesting pattern emerging from research that combines artificial intelligence with sound symbolism. In these studies, supervised machine learning algorithms are trained to classify samples based on the sounds of referent names. Machine learning algorithms are efficient learners of sound symbolism, but they tend to bias one category over the other. The pattern is this: when a category arguably represents greater threat, the algorithms tend to overpredict that category. A hypothesis, framed by error management theory, is presented that proposes this may be evidence of an adaptation that favours cautious behaviour. This hypothesis is tested by constructing extreme gradient boosted (XGBoost) models using the sounds that make up the names of Chinese, Japanese and Korean Pokemon and observing the distribution of classification errors.
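The evaluation described above rests on comparing the two directions of misclassification. A minimal sketch of that check, with hypothetical labels and predictions standing in for the trained XGBoost model's output (the category names and data here are illustrative, not from the study):

```python
from collections import Counter

def error_distribution(y_true, y_pred, threat_label="legendary"):
    """Count misclassifications in each direction to test whether the
    classifier overpredicts the higher-threat category."""
    errors = Counter()
    for t, p in zip(y_true, y_pred):
        if t != p:
            errors["to_threat" if p == threat_label else "to_safe"] += 1
    return dict(errors)

# Hypothetical labels: "legendary" (higher threat) vs "ordinary".
y_true = ["ordinary", "ordinary", "legendary", "ordinary", "legendary", "ordinary"]
y_pred = ["legendary", "ordinary", "legendary", "legendary", "ordinary", "ordinary"]

print(error_distribution(y_true, y_pred))  # {'to_threat': 2, 'to_safe': 1}
```

An asymmetry like the one above (more errors toward the threat category than away from it) is the signature the study looks for.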



Automatic Learning of Combat Models for RTS Games

Uriarte, Alberto (Drexel University) | Ontañón, Santiago (Drexel University)

AAAI Conferences

Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a forward model is not readily available. In this paper we address the problem of automatically learning forward models (more specifically, combat models) for two-player attrition games. We report experiments comparing several approaches to learning such combat models from replay data against models generated by hand. We use StarCraft, a Real-Time Strategy (RTS) game, as our application domain. Specifically, we use a large collection of already collected replays, and focus on learning a combat model for tactical combats.
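One simple family of combat models for attrition games is a Lanchester-style attrition law, where each side's losses depend on the opposing army's size times a per-unit effectiveness coefficient; the coefficients are the kind of parameter one might fit from replay data. The sketch below is an illustrative forward model under that assumption, not the paper's actual method:

```python
def simulate_combat(a_units, b_units, a_eff=1.0, b_eff=1.0, dt=0.1, max_steps=10_000):
    """Forward-model sketch: Lanchester square-law attrition.
    Each side loses strength proportionally to the opposing army's
    size times its per-unit effectiveness (a_eff, b_eff are the
    parameters that could be estimated from replays).
    Returns the surviving strength of each side."""
    a, b = float(a_units), float(b_units)
    for _ in range(max_steps):
        if a <= 0 or b <= 0:
            break
        # Simultaneous update: both sides deal damage at once.
        a, b = a - b_eff * b * dt, b - a_eff * a * dt
    return max(a, 0.0), max(b, 0.0)

# Under equal effectiveness, the larger army wins with survivors to spare.
survivors_a, survivors_b = simulate_combat(10, 8)
```

MCTS can then call such a model at each simulated combat instead of replaying the full game engine, which is the role the learned combat model plays in the paper.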